
Non-record: 11L GEPA + 20k Steps + Pure Int6 + Legal TTT (val_bpb=1.0983): unlimited compute: 4×A100-40GB, ~2.8 hours#628

Open
Christopher-Lee-McClendon wants to merge 2 commits into openai:main from Christopher-Lee-McClendon:submission/11L-gepa-20k-pure-int6-legal-ttt

Conversation


@Christopher-Lee-McClendon Christopher-Lee-McClendon commented Mar 24, 2026

11L GEPA + 20k Steps + Pure Int6 + Legal TTT → 1.0983 BPB

Non-record unlimited-compute submission. Breaks the 1.10 BPB barrier with legal score-first TTT.

Key Numbers

| Metric | Value |
| --- | --- |
| val_bpb | 1.0983 |
| Pre-TTT float | 1.1153 |
| Quantized (int6) | ~1.142 |
| TTT gain (quant→final) | −0.044 |
| Artifact | 14.29 MB |
| Training | 20k steps, 4×A100, ~2.8 h |

Scaling Table

| Steps | Peak-LR | Warmdown | Float | TTT BPB | Artifact |
| --- | --- | --- | --- | --- | --- |
| 9,000 | 5,000 | 4,000 | 1.135 | 1.116 | 14.94 MB |
| 12,000 | 7,000 | 5,000 | 1.127 | 1.108 | 14.79 MB |
| 15,000 | 9,000 | 6,000 | 1.122 | 1.104 | 14.52 MB |
| 20,000 | 12,000 | 8,000 | 1.115 | 1.098 | 14.29 MB |

Research Contributions (5 transferable findings)

  1. Warmdown is a first-class variable — The model plateaus at ~1.216 BPB during late peak-LR (steps 7k–12k, improving at only ~2 mBPB/kstep); warmdown then delivers −0.101 BPB at ~12.6 mBPB/kstep, roughly 6× the late-plateau rate. Warmdown isn't cleanup; it's where most of the remaining gain originates once the plateau sets in.
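The 20k-step configuration (12k peak + 8k linear warmdown) can be sketched as a simple piecewise schedule. This is a minimal illustration, not the submission's actual code; the `peak_lr` value is a hypothetical placeholder:

```python
def lr_at(step: int,
          total_steps: int = 20_000,
          peak_steps: int = 12_000,
          peak_lr: float = 1e-3,   # hypothetical value, not taken from this PR
          min_lr: float = 0.0) -> float:
    """Hold peak LR for `peak_steps`, then warm down linearly to `min_lr`."""
    if step <= peak_steps:
        return peak_lr
    frac = (step - peak_steps) / (total_steps - peak_steps)  # 0 → 1 over warmdown
    return peak_lr + frac * (min_lr - peak_lr)
```

With these numbers the warmdown phase occupies 8k of 20k steps (40% of training), matching the warmdown-emphasis point in the transfer checklist.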

  2. Better-trained models compress smaller — 20k-step model → 14.29 MB (smallest artifact), despite identical architecture and quantization. Optimization quality improves weight compressibility, not just float loss.
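The artifact-size effect can be reproduced end to end with a minimal sketch of symmetric per-row int6 quantization followed by entropy coding. This is not the PR's 15-candidate GPTQ-lite search, and `zlib` stands in here for the zstd level-22 used in the actual pipeline, purely so the example is stdlib-runnable:

```python
import zlib
import numpy as np

def quantize_int6_per_row(w: np.ndarray):
    """Symmetric per-row int6: scale each row so its max |w| maps to ±31."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 31.0
    scale = np.where(scale == 0, 1.0, scale)          # guard all-zero rows
    q = np.clip(np.rint(w / scale), -32, 31).astype(np.int8)
    return q, scale

def artifact_bytes(q: np.ndarray) -> int:
    # zlib is a stand-in for the zstd-22 compression used in the submission
    return len(zlib.compress(q.tobytes(), level=9))
```

The "better-trained models compress smaller" finding is then a statement about the entropy of the quantized integer grid: the same `artifact_bytes` call on a better-optimized checkpoint returns fewer bytes.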

  3. SGD >> AdamW for legal TTT (controlled comparison) — On the same 5.2k-step, 24.6M-param base model, SGD+momentum delivers 2.4× the TTT gain of AdamW (−0.017 vs −0.007 float→final); Adam's moment estimates can't converge in ~30 steps per chunk. Separately, the 20k GEPA model's −0.044 TTT gain is measured from a different baseline (quant→final) on a different architecture, so the two numbers are not directly comparable.
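Why momentum SGD can make progress within a ~30-step chunk budget is easy to see on a toy objective. The sketch below uses the heavy-ball update with the PR's momentum 0.9; the learning rate and the quadratic objective are hypothetical stand-ins for a real per-chunk TTT loss:

```python
import numpy as np

def sgd_momentum(grad_fn, w, steps=30, lr=0.1, mu=0.9):
    """Plain SGD with heavy-ball momentum: v ← mu·v − lr·g; w ← w + v."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad_fn(w)
        w = w + v
    return w

# Toy objective 0.5·||w||² (gradient = w); 30 steps mirrors the per-chunk budget.
w0 = np.ones(8)
w30 = sgd_momentum(lambda w: w, w0.copy())
```

The update has no warm-up state to estimate, which is the contrast with Adam: its second-moment estimate is still far from converged after 30 gradients, so its effective step sizes are poorly calibrated within a chunk.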

  4. Freezing early layers is active regularization — Freezing 2 of 11 blocks (~18% of depth) during TTT isn't just a defense against catastrophic forgetting. Early layers hold generic features; later layers are the better adaptation surface. Even though freezing shrinks the set of trainable parameters, the model adapts better.
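A framework-free sketch of the freeze-early-blocks rule, with each block represented as a dict of numpy arrays (names and shapes are hypothetical; in a real PyTorch setup this would be `p.requires_grad_(False)` on the parameters of the first blocks):

```python
import numpy as np

N_FREEZE = 2  # freeze the first 2 of 11 blocks during TTT

def ttt_step(blocks, grads, lr=1e-2):
    """One TTT update that skips the first N_FREEZE (early, generic) blocks."""
    for i, (params, g) in enumerate(zip(blocks, grads)):
        if i < N_FREEZE:
            continue  # early layers hold generic features; leave them fixed
        for name in params:
            params[name] -= lr * g[name]

blocks = [{"w": np.ones(4)} for _ in range(11)]
grads  = [{"w": np.ones(4)} for _ in range(11)]
ttt_step(blocks, grads)
```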

  5. After the right TTT family, invest in the base model — TTT's share of total gain over naive baseline shrinks from 22% (5.2k-step base) to 13% (20k-step base). The big jump came from choosing the right TTT regime (SGD + freeze + multi-epoch). After that, base model quality delivers more BPB per unit of effort than TTT micro-tuning.

What Transfers to Record Track

✅ Warmdown emphasis (≥40% of total steps)
✅ GPTQ-lite / pure int6
✅ SGD-based legal TTT (2.4× gain over AdamW, validated on same base)
✅ Freeze-early-blocks as TTT regularization

⚠️ Less transferable: very long training curves, large eval-time TTT budgets (10–30 epochs → 2000–3600s eval)

Open Frontiers

The local TTT recipe appears mostly saturated. Next questions are structural: stream vs. document-based adaptation, self-distillation at test time, quantization-aware TTT, and base-training scaling laws under fixed 16 MB budget.

Full analysis with all tables and derivations in README.md.

Prior Non-Record Submissions

Acknowledgments

Builds on techniques from: @signalrush (PR #414, GPTQ-lite/EMA), @jfprincz (PRs #287/#315, XSA/Partial RoPE/LN Scale), @unnir (PR #265, Efficient XSA), raahilshah (PR #162, SmearGate/BigramHash), @aruniyer (PR #86, Int6 QAT), samacqua (LoRA TTT), @abaybektursun (PR #549, LeakyReLU²), and the OpenAI baseline.

- Non-record unlimited-compute submission: val_bpb=1.0983 (below 1.10)
- 20000-step training (12000 peak-LR + 8000 warmdown) on 4xA100-40GB
- Pure int6 per-row quantization with 15-candidate GPTQ-lite + zstd-22
- Legal score-first TTT (SGD, 10 epochs, momentum 0.9): -0.044 BPB gain
- Float base 1.1153, artifact 14.29 MB (14,985,742 bytes)
- Finding 1: Warmdown is a first-class variable (6x late-plateau rate)
- Finding 2: Better-trained models compress smaller
- Finding 3: SGD >> AdamW for legal TTT (2.4x gain, same base)
- Finding 4: Freeze-early-layers is active regularization
- Finding 5: After right TTT family, invest in base model
- What transfers to record track section
- Open frontiers section
